22 research outputs found

    Local search based evolutionary multi-objective optimization algorithm for constrained and unconstrained problems

    Evolutionary multi-objective optimization algorithms have been commonly used for over a decade to obtain a set of non-dominated solutions. Recently, much emphasis has been placed on hybridizing evolutionary algorithms with MCDM and mathematical programming algorithms to yield a computationally efficient and convergent procedure. In this paper, we rigorously test an augmented local search based EMO procedure on a test suite of constrained and unconstrained multi-objective optimization problems. The success of our approach on most of the test problems not only provides confidence but also stresses the importance of hybrid evolutionary algorithms in solving multi-objective optimization problems.
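    The procedure above builds on the standard notion of Pareto non-dominance. As a minimal illustration (not the authors' implementation), the following Python sketch filters a population of objective vectors down to its non-dominated subset, assuming all objectives are minimized; the example points are placeholders.

```python
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """Return True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(population: List[Sequence[float]]) -> List[Sequence[float]]:
    """Keep only the solutions that no other member of the population dominates."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Example: three candidate solutions evaluated on two objectives.
# (3.0, 3.5) is dominated by (2.0, 3.0) and is filtered out.
print(non_dominated([(1.0, 4.0), (2.0, 3.0), (3.0, 3.5)]))
```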

    A survey on handling computationally expensive multiobjective optimization problems with evolutionary algorithms

    This is the author accepted manuscript. The final version is available from Springer Verlag via the DOI in this record. Evolutionary algorithms are widely used for solving multiobjective optimization problems but are often criticized because of the large number of function evaluations needed. Approximations, especially function approximations, also referred to as surrogates or metamodels, are commonly used in the literature to reduce computation time. This paper presents a survey of 45 different recent algorithms proposed in the literature between 2008 and 2016 for handling computationally expensive multiobjective optimization problems. The algorithms are discussed based on the kind of approximation they use, such as problem, function, or fitness approximation, with most emphasis given to function approximation-based algorithms. We also compare these algorithms based on different criteria, such as the metamodeling technique and evolutionary algorithm used, the type and dimensions of the problems solved, constraint handling, training time, and the type of evolution control. Furthermore, we identify and discuss promising elements and major issues among algorithms in the literature related to using an approximation, as well as the numerical settings used. In addition, we discuss selecting an algorithm to solve a given computationally expensive multiobjective optimization problem based on the dimensions in both objective and decision spaces and the computation budget available. The research of Tinkle Chugh was funded by the COMAS Doctoral Program (at the University of Jyväskylä) and the FiDiPro project DeCoMo (funded by Tekes, the Finnish Funding Agency for Innovation), and the research of Dr. Karthik Sindhya was funded by the SIMPRO project (also funded by Tekes) as well as DeCoMo.

    Hybridization of SBX based NSGA-II and sequential quadratic programming for solving multi-objective optimization problems

    Most real-world search and optimization problems involve multiple conflicting objectives and result in a Pareto-optimal set. Various multi-objective optimization algorithms have been proposed for solving such problems, with the goals of finding as many trade-off solutions as possible and maintaining diversity among them. Over the last decade, evolutionary multi-objective optimization (EMO) algorithms have been applied successfully to various test and real-world optimization problems. These population-based algorithms provide a diverse set of non-dominated solutions. The obtained non-dominated set is close to the true Pareto-optimal front, but its convergence to the true Pareto-optimal front is not guaranteed. Hence, to ensure convergence, a local search method using a classical algorithm can be applied. In the present work, SBX-based NSGA-II is used as the population-based approach and the sequential quadratic programming (SQP) method is used as the local search procedure. This hybridization of evolutionary and classical algorithms provides confidence of converging near the true Pareto-optimal set with good diversity. The proposed procedure is successfully applied to 13 test problems with two, three, and five objectives, and the obtained results validate our motivation for hybridizing evolutionary and classical methods. A minimal sketch of the local search step appears below.
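    The sketch below is a rough illustration of the hybrid idea, not the authors' code: a single solution returned by an EMO run is refined with SciPy's SLSQP solver (a sequential quadratic programming implementation) by minimizing a weighted-sum scalarization of the objectives. The bi-objective problem, weights, and bounds are placeholder assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def objectives(x: np.ndarray) -> np.ndarray:
    """Placeholder bi-objective problem (both objectives minimized)."""
    return np.array([x[0] ** 2 + x[1] ** 2, (x[0] - 1.0) ** 2 + x[1] ** 2])

def local_refine(x_emo: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Refine an EMO solution by SQP on a weighted-sum scalarization of the objectives."""
    scalarized = lambda x: float(np.dot(weights, objectives(x)))
    result = minimize(scalarized, x_emo, method="SLSQP", bounds=[(-2.0, 2.0)] * len(x_emo))
    return result.x if result.success else x_emo  # fall back to the EMO solution if SQP fails

# Example: refine a (hypothetical) non-dominated point found by NSGA-II.
print(local_refine(np.array([0.6, 0.1]), weights=np.array([0.5, 0.5])))
```

    In the paper a scalarizing function tailored to multi-objective local search would be used; the weighted sum here is only the simplest stand-in.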

    Deciphering innovative principles for optimal electric brushless D.C. permanent magnet motor design

    This paper shows how a routine design optimization task can be enhanced to decipher important and innovative design principles that provide far-reaching knowledge about the problem at hand. Although the 'innovization' task for this purpose was proposed by the first author elsewhere, the application to a brushless D.C. permanent magnet motor design is the first real application of the innovization concept to a discrete optimization problem. The models for the cost and peak-torque objectives and the associated constraints are borrowed from an existing study. The extent of knowledge gained in designing high-performing yet low-cost motors in this study is phenomenal and should motivate other practitioners to pursue similar studies in other design and optimization related activities.

    A Surrogate-assisted Reference Vector Guided Evolutionary Algorithm for Computationally Expensive Many-objective Optimization

    We propose a surrogate-assisted reference vector guided evolutionary algorithm for computationally expensive optimization problems with more than three objectives. The proposed algorithm is based on a recently developed evolutionary algorithm for many-objective optimization that relies on a set of adaptive reference vectors for selection. The proposed surrogate-assisted evolutionary algorithm uses Kriging to approximate each objective function to reduce the computational cost. In managing the Kriging models, the algorithm focuses on the balance of diversity and convergence by making use of the uncertainty information in the approximated objective values given by the Kriging models, the distribution of the reference vectors, and the location of the individuals. In addition, we design a strategy for choosing data for training the Kriging models to limit the computation time without impairing the approximation accuracy. Empirical results comparing the new algorithm with state-of-the-art surrogate-assisted evolutionary algorithms on a number of benchmark problems demonstrate the competitiveness of the proposed algorithm.
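    As a minimal sketch of the surrogate idea (not the algorithm in the paper), the code below fits a Kriging model, here scikit-learn's Gaussian process regressor, to a handful of expensive evaluations of one objective and queries both the predicted mean and the predictive uncertainty that the model management described above relies on. The objective function and sample sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_objective(x: np.ndarray) -> np.ndarray:
    """Stand-in for one computationally expensive objective function."""
    return np.sin(3.0 * x).ravel() + 0.5 * x.ravel() ** 2

# A small set of (expensive) training evaluations.
X_train = np.linspace(-2.0, 2.0, 8).reshape(-1, 1)
y_train = expensive_objective(X_train)

# Fit the Kriging (Gaussian process) surrogate for this objective.
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
surrogate.fit(X_train, y_train)

# The surrogate supplies both an approximation and its uncertainty for new candidates,
# which a surrogate-assisted EA can use to balance convergence and diversity.
X_new = np.array([[0.25], [1.75]])
mean, std = surrogate.predict(X_new, return_std=True)
print(mean, std)  # predicted objective values and their standard deviations
```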

    A New Hybrid Mutation Operator for Multiobjective Optimization with Differential Evolution

    Differential evolution has become one of the most widely used evolutionary algorithms in multiobjective optimization. Its linear mutation operator is a simple and powerful mechanism to generate trial vectors. However, the performance of the mutation operator can be improved by including a nonlinear part. In this paper, we propose a new hybrid mutation operator, consisting of a polynomial based operator with nonlinear curve tracking capabilities and differential evolution's original mutation operator, to efficiently handle various interdependencies between decision variables. The resulting hybrid operator is straightforward to implement and can be used within most evolutionary algorithms. In particular, it can be used as a replacement in all algorithms utilizing the original mutation operator of differential evolution. We demonstrate how the new hybrid operator can be used by incorporating it into MOEA/D, a winning evolutionary multiobjective algorithm in a recent competition. The usefulness of the hybrid operator is demonstrated with extensive numerical experiments showing improvements in performance compared to the previous state of the art.
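    For context (this is not the operator proposed in the paper), the sketch below shows differential evolution's standard linear mutation, DE/rand/1, together with a simple stochastic switch to Deb's polynomial mutation as an illustrative nonlinear component; the paper's hybrid instead uses its own polynomial-based curve-tracking operator. Bounds and probabilities are placeholder assumptions.

```python
import numpy as np

def de_rand_1(pop: np.ndarray, i: int, F: float = 0.5) -> np.ndarray:
    """Standard DE/rand/1 linear mutation: v = x_r1 + F * (x_r2 - x_r3)."""
    r1, r2, r3 = np.random.choice([j for j in range(len(pop)) if j != i], 3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def polynomial_mutation(x: np.ndarray, low: float, high: float, eta: float = 20.0) -> np.ndarray:
    """Deb's polynomial mutation, used here only as a stand-in nonlinear component."""
    u = np.random.rand(len(x))
    delta = np.where(u < 0.5,
                     (2 * u) ** (1 / (eta + 1)) - 1,
                     1 - (2 * (1 - u)) ** (1 / (eta + 1)))
    return np.clip(x + delta * (high - low), low, high)

def hybrid_mutation(pop: np.ndarray, i: int, p_nonlinear: float = 0.25) -> np.ndarray:
    """Apply the linear DE mutation, then occasionally add a nonlinear perturbation."""
    trial = de_rand_1(pop, i)
    if np.random.rand() < p_nonlinear:
        trial = polynomial_mutation(trial, low=-5.0, high=5.0)
    return trial

# Example: generate one trial vector for individual 0 of a small random population.
population = np.random.uniform(-5.0, 5.0, size=(10, 3))
print(hybrid_mutation(population, 0))
```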